27 research outputs found

    Toward Sustainable IoT Applications: Unique Challenges for Programming the Batteryless Edge

    The advent of ultra-low-power computer systems has enabled intermittently powered, battery-free devices to operate using harvested ambient energy. We present a roadmap from today’s continuously powered Internet of Things devices to tomorrow’s battery-free devices that highlights challenges for those writing intermittent programs.

    UDAVA: an unsupervised learning pipeline for sensor data validation in manufacturing

    Manufacturing has enabled the mechanized mass production of the same (or similar) products by replacing craftsmen with assembly lines of machines. The quality of each product in an assembly line hinges greatly on continual observation and error compensation during machining, using sensors that measure quantities such as the position and torque of a cutting tool and vibrations due to possible imperfections in the cutting tool and raw material. Patterns observed in sensor data from a (near-)optimal production cycle should ideally recur in subsequent production cycles with minimal deviation. Manually labeling and comparing such patterns is an insurmountable task due to the massive amount of streaming data a production process can generate. We present UDAVA, an unsupervised machine learning pipeline that automatically discovers process behavior patterns in sensor data for a reference production cycle. UDAVA clusters reduced-dimensionality summary statistics of raw sensor data to enable high-speed clustering of dense time-series data. It deploys the model as a service that verifies batch data from subsequent production cycles to detect recurring behavior patterns and quantify deviation from the reference behavior. We have evaluated UDAVA from an AI engineering perspective using two industrial case studies.
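    The core idea described in the abstract, compressing a dense sensor stream into per-window summary statistics and then clustering the windows so that recurring process behavior maps to recurring cluster labels, can be sketched roughly as below. This is an illustrative toy, not the UDAVA implementation; the window size, features, and two-cluster setup are assumptions for the example.

```python
import math
import random
import statistics

def window_features(signal, window=50):
    """Reduce a raw signal to one (mean, std) summary point per fixed-size window."""
    feats = []
    for i in range(0, len(signal) - window + 1, window):
        chunk = signal[i:i + window]
        feats.append((statistics.fmean(chunk), statistics.pstdev(chunk)))
    return feats

def two_means(points, iters=20):
    """Tiny 2-means clustering of 2-D feature points; returns 0/1 labels."""
    centers = [points[0], points[-1]]  # deterministic init: first and last window
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [0 if math.dist(p, centers[0]) <= math.dist(p, centers[1]) else 1
                  for p in points]
        for c in (0, 1):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:  # recompute each center as the mean of its members
                centers[c] = tuple(map(statistics.fmean, zip(*members)))
    return labels

# Synthetic reference cycle: a quiet machining phase followed by a
# high-vibration phase; the two phases should land in different clusters.
rng = random.Random(1)
quiet = [rng.gauss(0.0, 0.1) for _ in range(200)]
noisy = [rng.gauss(0.0, 1.0) for _ in range(200)]
labels = two_means(window_features(quiet + noisy))
```

Clustering the 8 summary points instead of the 400 raw samples is what makes this approach fast on dense streams; deviation from a reference cycle can then be quantified by comparing label sequences.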

    Uncertainty-aware Virtual Sensors for Cyber-Physical Systems

    Virtual sensors in Cyber-Physical Systems (CPS) are AI replicas of physical sensors that mimic their behavior by processing input data from other sensors monitoring the same system. However, we cannot always trust these replicas due to uncertainty arising from changes in environmental conditions, measurement errors, model structure errors, and unknown input data. An awareness of numerical uncertainty in these models can help ignore some predictions and communicate limitations for responsible action. We present a data pipeline to train and deploy uncertainty-aware virtual sensors in CPS. Our virtual sensor, based on a Bayesian Neural Network (BNN), predicts the expected values of a physical sensor along with a standard deviation indicating the degree of uncertainty in its predictions. We discuss how this uncertainty awareness bolsters trustworthy AI using a vibration-sensing virtual sensor in automotive manufacturing.

    Acknowledgement: This work was conducted as part of the InterQ project (958357) and the DAT4.ZERO project (958363), funded by the European Commission within the Horizon 2020 research and innovation programme.

    Experimental Evaluation of a Tool for Change Impact Prediction in Requirements Models: Design, Results and Lessons Learned

    There are commercial tools, such as IBM Rational RequisitePro and DOORS, that support semi-automatic change impact analysis for requirements. These tools capture requirements relations and allow tracing the paths they form. In most of these tools, relation types say nothing about the meaning of the relations except their direction. When a change is introduced to a requirement, the requirements engineer analyzes its impact on related requirements. When the semantic information needed to determine precisely how requirements relate to each other is missing, the requirements engineer generally has to assume worst-case dependencies based on the available syntactic information alone. We developed a tool that uses formal semantics of requirements relations to support change impact analysis and prediction in requirements models. The tool, TRIC (Tool for Requirements Inferencing and Consistency checking), works on models that explicitly represent requirements and the relations among them with their formal semantics. In this paper we report on an evaluation of how TRIC improves the quality of change impact predictions. A quasi-experiment is systematically designed and executed to empirically validate the impact of TRIC. We conduct the quasi-experiment with 21 master’s degree students predicting change impact for five change scenarios in a real software requirements specification. The participants are assigned Microsoft Excel, IBM RequisitePro, or TRIC to perform change impact prediction for the change scenarios. It is hypothesized that using TRIC would positively impact the quality of change impact predictions, and two formal hypotheses are developed. As a result of the experiment, we are not able to reject the null hypotheses, and thus cannot show experimentally the effectiveness of our tool. In the paper we discuss reasons for this failure to reject the null hypotheses.

    Taming Data Quality in AI-Enabled Industrial Internet of Things

    We address the problem of taming data quality in artificial intelligence (AI)-enabled Industrial Internet of Things systems by devising machine learning pipelines as part of a decentralized edge-to-cloud architecture. We present the design and deployment of our approach from an AI engineering perspective using two industrial case studies.

    Modeling security and privacy requirements: A use case-driven approach

    Context: Modern internet-based services, ranging from food delivery to home care, leverage the availability of multiple programmable devices to provide handy services tailored to end-user needs. These services are delivered through an ecosystem of device-specific software components and interfaces (e.g., mobile and wearable device applications). Since they often handle private information (e.g., location and health status), their security and privacy requirements are of crucial importance. Defining and analyzing those requirements is a significant challenge due to the multiple types of software components and devices integrated into software ecosystems. Each software component presents peculiarities that often depend on the context and the devices the component interacts with, and these must be considered when dealing with security and privacy requirements. Objective: In this paper, we propose, apply, and assess a modeling method that supports the specification of security and privacy requirements in a structured and analyzable form. Our motivation is that, in many contexts, use cases are common practice for the elicitation of functional requirements and should also be adapted for describing security requirements. Method: We integrate an existing approach for modeling security and privacy requirements in terms of security threats, their mitigations, and their relations to use cases in a misuse case diagram. We introduce new security-related templates, i.e., a mitigation template and a misuse case template, for specifying mitigation schemes and misuse case specifications in a structured and analyzable manner. Natural language processing can then be used to automatically report inconsistencies among artifacts and between the templates and specifications. Results: We successfully applied our approach to an industrial healthcare project and report lessons learned and results from structured interviews with engineers.
Conclusion: Since our approach supports the precise specification and analysis of security threats, threat scenarios, and their mitigations, it also supports decision making and the analysis of compliance with standards.

    ETAP: Energy-aware Timing Analysis of Intermittent Programs

    Battery-free embedded devices rely solely on harvested ambient energy, which enables stand-alone and sustainable IoT applications. These devices execute programs when the harvested energy in their energy reservoir is sufficient to operate, and stop execution abruptly (to start charging) otherwise. Such intermittent programs have varying timing behavior under different energy conditions, hardware configurations, and program structures. This paper presents Energy-aware Timing Analysis of intermittent Programs (ETAP), a probabilistic symbolic execution approach that analyzes the timing and energy behavior of intermittent programs at compile time. ETAP symbolically executes the given program while taking into account time and energy cost models for ambient energy and dynamic energy consumption. We evaluated ETAP on several intermittent programs and compared the compile-time analysis results with executions on real hardware. The results show that ETAP's normalized prediction accuracy is 99.5% and that it speeds up timing analysis by at least two orders of magnitude compared to manual testing.
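    Why intermittent timing differs so much from continuous timing can be seen with a toy back-of-the-envelope model: wall-clock time is compute time plus recharge stalls whenever the energy buffer drains. This is only an illustration of the problem ETAP addresses, not ETAP's probabilistic symbolic execution; all parameters and numbers below are made up.

```python
import math

def intermittent_runtime(compute_s, consume_mw, harvest_mw, buffer_mj):
    """Wall-clock seconds to finish `compute_s` seconds of work on a device
    that drains a `buffer_mj`-millijoule reservoir at `consume_mw` milliwatts
    and recharges it at `harvest_mw` milliwatts while off. Units cancel:
    mJ / mW = seconds."""
    assert consume_mw > harvest_mw > 0
    on_per_cycle = buffer_mj / consume_mw      # seconds of work per full charge
    charge_per_cycle = buffer_mj / harvest_mw  # seconds to refill the buffer
    bursts = math.ceil(compute_s / on_per_cycle)
    recharges = bursts - 1                     # no recharge needed after the last burst
    return compute_s + recharges * charge_per_cycle

# 2 s of computation at 10 mW, harvesting 1 mW into a 5 mJ buffer:
# 4 bursts of 0.5 s separated by 3 recharges of 5 s each.
wall = intermittent_runtime(2.0, 10, 1, 5)
```

Even this crude model shows the dependence on energy conditions the abstract mentions: 2 seconds of computation stretch to 17 seconds of wall-clock time, and the stretch factor shifts with the harvest rate and buffer size.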

    Virtual sensors for erroneous data repair in manufacturing: a machine learning pipeline

    Manufacturing converts raw materials into finished products using machine tools for controlled material removal or deposition. The process can be observed using sensors installed within and around machine tools. These sensors measure quantities such as vibrations, cutting forces, temperature, currents, power consumption, and acoustic emission to diagnose defects and enable zero-defect manufacturing as part of the Industry 4.0 vision. The continuity of high-quality sensor data streams is fundamental to predicting phenomena such as geometric deformations, surface roughness, excessive coolant use, and imminent tool wear with adequate accuracy and appropriate timing. In practice, however, data acquired by some sensors can be of poor quality and unsuitable for prediction due to sensor faults stemming from environmental factors. In this paper, we investigate whether erroneous data from a faulty sensor can be repaired based on data simultaneously available from redundant sensors that observe the same process. We present a machine learning pipeline to synthesize virtual sensors that can step in for faulty sensors to maintain reasonable quality and continuity in sensor data streams. We have validated the synthesized virtual sensors in four industrial case studies.